Find your Way by Observing the Sun and Other Semantic Cues
In this paper we present a robust, efficient and affordable approach to
self-localization which requires neither GPS nor knowledge about the
appearance of the world. Towards this goal, we utilize freely available
cartographic maps and derive a probabilistic model that exploits semantic cues
in the form of sun direction, presence of an intersection, road type, speed
limit as well as the ego-car trajectory in order to produce very reliable
localization results. Our experimental evaluation shows that our approach can
localize much faster (in terms of driving time) with less computation and more
robustly than competing approaches, which ignore semantic information.
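The cue fusion described above can be illustrated as a simple Bayes-filter update over candidate map locations. This is a hypothetical sketch, not the paper's implementation; the cue names and per-segment likelihood values are assumptions for illustration.

```python
import numpy as np

def bayes_update(belief, likelihoods):
    """One Bayes-filter step: multiply the prior belief over candidate
    locations by the likelihood of the observed cue, then renormalize."""
    posterior = belief * likelihoods
    return posterior / posterior.sum()

# Three candidate road segments; start from a uniform prior.
belief = np.ones(3) / 3

# Hypothetical per-segment likelihoods of three observed semantic cues:
# sun direction, presence of an intersection, and speed limit.
sun   = np.array([0.7, 0.2, 0.1])
inter = np.array([0.6, 0.3, 0.1])
speed = np.array([0.5, 0.4, 0.1])

for cue in (sun, inter, speed):
    belief = bayes_update(belief, cue)

print(belief.argmax())  # → 0: segment 0 dominates after fusing all cues
```

Each additional cue sharpens the posterior, which is why fusing several weak semantic signals can localize faster than any single one.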
MapPrior: Bird's-Eye View Map Layout Estimation with Generative Models
Despite tremendous advancements in bird's-eye view (BEV) perception, existing
models fall short in generating realistic and coherent semantic map layouts,
and they fail to account for uncertainties arising from partial sensor
information (such as occlusion or limited coverage). In this work, we introduce
MapPrior, a novel BEV perception framework that combines a traditional
discriminative BEV perception model with a learned generative model for
semantic map layouts. Our MapPrior delivers predictions with better accuracy,
realism, and uncertainty awareness. We evaluate our model on the large-scale
nuScenes benchmark. At the time of submission, MapPrior outperforms the
strongest competing method, with significantly improved MMD and ECE scores in
camera- and LiDAR-based BEV perception.
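The ECE score mentioned above (expected calibration error) measures how well predicted confidences match empirical accuracy. A minimal, model-independent sketch of the standard binned computation, unrelated to MapPrior's internals:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: bin predictions by confidence and
    average the |mean confidence - empirical accuracy| gap per bin,
    weighted by the fraction of samples in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    ece = 0.0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

# An overconfident toy model (high confidence, one miss) scores poorly;
# a perfectly calibrated one scores zero.
print(expected_calibration_error([0.95, 0.9, 0.92], [1, 0, 1]))
```

Lower ECE means the model's stated uncertainty can be trusted, which is the calibration property the abstract highlights.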
CASA: Category-agnostic Skeletal Animal Reconstruction
Recovering the skeletal shape of an animal from a monocular video is a
longstanding challenge. Prevailing animal reconstruction methods often adopt a
control-point driven animation model and optimize bone transforms individually
without considering skeletal topology, yielding unsatisfactory shape and
articulation. In contrast, humans can easily infer the articulation structure
of an unknown animal by associating it with a seen articulated character in
their memory. Inspired by this fact, we present CASA, a novel Category-Agnostic
Skeletal Animal reconstruction method consisting of two major components: a
video-to-shape retrieval process and a neural inverse graphics framework.
During inference, CASA first retrieves an articulated shape from a 3D character
asset bank whose rendered image scores highly against the input video,
according to a pretrained language-vision model. CASA then integrates the
retrieved character into an inverse graphics framework and jointly infers the
shape deformation, skeleton structure, and skinning weights through
optimization. Experiments validate the efficacy of CASA regarding shape
reconstruction and articulation. We further demonstrate that the resulting
skeletal-animated characters can be used for re-animation.
Comment: Accepted to NeurIPS 202
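The retrieval step can be sketched as nearest-neighbor search in a shared embedding space. This is a hypothetical illustration: CASA uses a pretrained language-vision model over rendered asset images and video frames, whereas the embeddings below are toy placeholders.

```python
import numpy as np

def retrieve_character(video_embedding, asset_embeddings):
    """Return the index of the asset whose rendered-image embedding has
    the highest cosine similarity to the input-video embedding."""
    v = video_embedding / np.linalg.norm(video_embedding)
    A = asset_embeddings / np.linalg.norm(asset_embeddings, axis=1, keepdims=True)
    return int(np.argmax(A @ v))

# Toy embeddings standing in for a pretrained language-vision encoder.
video = np.array([0.9, 0.1, 0.0])
assets = np.array([
    [0.1, 0.9, 0.0],    # hypothetical bird rig
    [0.88, 0.12, 0.0],  # hypothetical quadruped rig, close to the video
    [0.0, 0.0, 1.0],    # hypothetical fish rig
])
print(retrieve_character(video, assets))  # → 1
```

The retrieved rig then serves as initialization for the inverse-graphics optimization of deformation, skeleton, and skinning weights.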